DMFFNet: Dual-Mode Multi-Scale Feature Fusion-Based Pedestrian Detection Method

Authors

Abstract

Most contemporary pedestrian detection algorithms are based on visible-light image detection. However, in environments with dim light, small targets, and easily occluded, cluttered backgrounds, single-mode images relying on color, texture, and other features cannot adequately represent the feature information of targets; as a result, a large number of targets are lost and algorithm performance suffers. To address this problem, we propose a dual-modal multi-scale feature fusion network (DMFFNet). First, we use a MobileNet v3 backbone to extract features from the input images for the multi-scale fusion attention (MFA) module, incorporating the idea of the attention mechanism. Second, we deeply fuse the multi-scale features output by the MFA module with the double deep feature fusion (DDFF) module to enhance the semantic and geometric information of the target. Finally, we optimize the loss function to reflect the distance between the predicted box and the real box more realistically, as well as to improve the network's ability to predict difficult samples. We performed multi-directional evaluations on the KAIST dual-light dataset and the visible-thermal infrared (VTI) dataset from our laboratory through comparative and ablation experiments. The overall MR-2 was 9.26%, and MR-2 under no occlusion, partial occlusion, and severe occlusion was 5.17%, 23.35%, and 47.31%, respectively. The VTI results show that the method performs well in detection, especially when the target is occluded.
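The abstract's core idea (extract features from two modalities at several scales, then fuse them with attention-derived weights) can be sketched roughly as follows. This is an illustrative sketch only: the channel attention via global average pooling and the normalized weighted-sum fusion are assumptions for demonstration, not the paper's published MFA/DDFF implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Per-channel weight from global average pooling: (C, H, W) -> (C, 1, 1)."""
    pooled = feat.mean(axis=(1, 2), keepdims=True)  # global average pool
    return sigmoid(pooled)                          # squash to (0, 1)

def fuse_modalities(visible, thermal):
    """Attention-weighted fusion of two same-shape feature maps (C, H, W)."""
    w_v = channel_attention(visible)
    w_t = channel_attention(thermal)
    # Normalize the two attention weights so they sum to 1 per channel,
    # yielding a convex combination of the visible and thermal features.
    total = w_v + w_t
    return (w_v / total) * visible + (w_t / total) * thermal

# Toy multi-scale pyramid: fuse each scale independently, as a multi-scale
# fusion module would before passing the results to a detection head.
rng = np.random.default_rng(0)
scales = [(16, 32, 32), (32, 16, 16), (64, 8, 8)]  # (channels, height, width)
fused = [fuse_modalities(rng.standard_normal(s), rng.standard_normal(s))
         for s in scales]
print([f.shape for f in fused])  # one fused feature map per scale
```

The convex combination keeps the fused map in the same value range as its inputs; a real network would instead learn the attention weights end to end.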


Similar articles

A detection method of artificial area from high resolution remote sensing images based on multi scale and multi feature fusion

In order to solve the problem of automatically detecting artificial objects in high-resolution remote sensing images, a method for detecting artificial areas based on multi-scale and multi-feature fusion is proposed. Firstly, geometric features such as corners, straight lines, and right angles are extracted at the original resolution, and the pseudo ...


A Method of Video Flame Detection Based on Multi-Feature Fusion

A method of video flame detection based on multi-feature fusion is presented in this paper. Physical characteristics of flame, including color clues, flame movement and flame flicker are incorporated into the scheme to detect fires in color video frames. Firstly, mean filtering was used to smooth the video frames and a flame color filtering algorithm was adopted to extract flame candidate regio...


A Bionic Method of Moving Object Detection with Multi-Feature Fusion Based on Frog Vision Characteristics

In a complex natural background, the image features of moving objects usually change severely, and the kinematic and morphological features of a dynamic target are inconspicuous due to fast movement, unpredictable kinetic laws, and the accompanying scale transformation. Motion detection methods based on a single morphological, statistical, or kinetic feature would not meet the require...


Face Verification with Multi-Task and Multi-Scale Feature Fusion

Face verification for unrestricted faces in the wild is a challenging task. This paper proposes a method based on two deep convolutional neural networks (CNN) for face verification. In this work, we explore using identification signals to supervise one CNN and the combination of semi-verification and identification to train the other one. In order to estimate semi-verification loss at a low com...


Multi-Layer Model Based on Multi-Scale and Multi-Feature Fusion for SAR Images

A multi-layer classification approach based on multi-scales and multi-features (ML–MFM) for synthetic aperture radar (SAR) images is proposed in this paper. Firstly, the SAR image is partitioned into superpixels, which are local, coherent regions that preserve most of the characteristics necessary for extracting image information. Following this, a new sparse representation-based classification...



Journal

Journal title: IEEE Access

Year: 2022

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2022.3185986